Artificial neural networks for image recognition : a study of feature extraction methods and an implementation for handwritten character recognition.
Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1996. The use of computers for digital image recognition has become quite widespread.
Applications include face recognition, handwriting interpretation and fingerprint analysis.
A feature vector whose dimension is much lower than the original image data is used to
represent the image. This removes redundancy from the data and drastically cuts the
computational cost of the classification stage. The most important criterion for the
extracted features is that they must retain as much as possible of the discriminatory
information present in the original data. Feature extraction methods which have been used with neural
networks include moment invariants, Zernike moments, Fourier descriptors, Gabor filters and
wavelets. These, together with the Neocognitron, which incorporates feature extraction
within a neural network architecture, are described, and two methods, Zernike moments and
the Neocognitron, are chosen to illustrate the role of feature extraction in image recognition.
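Moment-based features of the kind surveyed in this thesis can be illustrated in a few lines. The sketch below (an illustration, not the thesis's implementation) computes the first Hu moment invariant from normalized central moments; it stays approximately constant when a character image is rescaled, up to pixel discretization error. The toy 8x8 and 16x16 "images" are assumptions for demonstration.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2D intensity image about its centroid."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00
    cy = (ys * img).sum() / m00
    return ((xs - cx) ** p * (ys - cy) ** q * img).sum()

def hu_first_invariant(img):
    """First Hu invariant: eta20 + eta02 (translation- and scale-invariant)."""
    mu00 = central_moment(img, 0, 0)
    eta = lambda p, q: central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)

# Toy 'character stroke' and a 2x-scaled copy (illustrative data)
small = np.zeros((8, 8)); small[2:6, 3:5] = 1.0
large = np.zeros((16, 16)); large[4:12, 6:10] = 1.0
# On a discrete grid the two values agree only approximately
print(hu_first_invariant(small), hu_first_invariant(large))
```

The low-dimensional invariant (one number here, seven for the full Hu set) is what replaces the raw pixel array as input to the classifier.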
Re-imagining health and well-being in low resource African settings using an augmented AI system and a 3D digital twin
In this paper, we discuss and explore the potential and relevance of recent
developments in artificial intelligence (AI) and digital twins for health and
well-being in low-resource African countries. Using an AI systems perspective,
we review emerging trends in AI systems and digital twins and propose an
initial augmented AI system architecture to illustrate how an AI system can
work in conjunction with a 3D digital twin. We highlight scientific knowledge
discovery, continual learning, pragmatic interoperability, and interactive
explanation and decision-making as important research challenges for AI systems
and digital twins.
Comment: Submitted to the Workshop on AI for Digital Twins and Cyber-physical Applications at IJCAI 2023, August 19-21, 2023, Macau, S.A.
Ontology driven multi-agent systems : an architecture for sensor web applications.
Thesis (Ph.D.)-University of KwaZulu-Natal, 2009. Advances in sensor technology and space science have resulted in the availability of vast quantities of
high quality earth observation data. This data can be used for monitoring the earth and to enhance our
understanding of natural processes. Sensor Web researchers are working on constructing a worldwide
computing infrastructure that enables dynamic sharing and analysis of complex heterogeneous earth observation
data sets. Key challenges that are currently being investigated include data integration; service
discovery, reuse and composition; semantic interoperability; and system dynamism. Two emerging technologies
that have shown promise in dealing with these challenges are ontologies and software agents.
This research investigates how these technologies can be integrated into an Ontology Driven Multi-Agent
System (ODMAS) for the Sensor Web.
The research proposes an ODMAS framework and an implemented middleware platform, i.e. the
Sensor Web Agent Platform (SWAP). SWAP deals with ontology construction, ontology use, and agent
based design, implementation and deployment. It provides a semantic infrastructure, an abstract architecture,
an internal agent architecture and a Multi-Agent System (MAS) middleware platform. Distinguishing
features include: the incorporation of Bayesian Networks to represent and reason about uncertain
knowledge; ontologies to describe system entities such as agent services, interaction protocols and agent
workflows; and a flexible adapter based MAS platform that facilitates agent development, execution and
deployment. SWAP aims to guide and ease the design, development and deployment of dynamic alerting
and monitoring applications. The efficacy of SWAP is demonstrated by two satellite image processing
applications, viz. wildfire detection and informal settlement monitoring. This approach can provide significant
benefits to a wide range of Sensor Web users, including: developers, for deploying agents
and agent-based applications; end users, for accessing, managing and visualising information provided by
real-time monitoring applications; and scientists, who can use the Sensor Web as a scientific computing
platform to facilitate knowledge sharing and discovery.
An Ontology Driven Multi-Agent Sensor Web has the potential to forever change the way in which
geospatial data and knowledge are accessed and used. This research describes this far-reaching vision,
identifies key challenges and provides a first step towards realising the vision.
An Analysis of Artificial Intelligence Techniques in Multiplayer Online Battle Arena Game Environments
The 3D computer gaming industry is constantly exploring new avenues for creating immersive and engaging environments. One avenue being explored is autonomous control of the behaviour of non-player characters (NPCs). This paper reviews and compares existing artificial intelligence (AI) techniques for controlling the behaviour of NPCs in Multiplayer Online Battle Arena (MOBA) game environments. Two techniques, the fuzzy state machine (FuSM) and the emotional behaviour tree (EBT), were reviewed and compared. In addition, an alternative, simple mechanism for incorporating emotion in a behaviour tree is proposed and tested. Initial tests show that it is a viable and promising mechanism for effectively tracking the emotional state of an NPC and for incorporating emotion in NPC decision making.
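The general idea of folding emotion into a behaviour tree can be sketched as a selector whose branch guards read the NPC's emotional state. The emotion names, thresholds and decay rate below are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    """Simple emotional state for an NPC (illustrative dimensions)."""
    fear: float = 0.0
    anger: float = 0.0

    def decay(self, rate=0.1):
        """Emotions fade each tick so behaviour returns to baseline."""
        self.fear = max(0.0, self.fear - rate)
        self.anger = max(0.0, self.anger - rate)

def tick(emotion):
    """Selector node: run the first branch whose emotional guard passes."""
    if emotion.fear > 0.7:
        return "flee"        # high fear overrides all other behaviour
    if emotion.anger > 0.5:
        return "attack"
    return "patrol"          # default branch when no guard fires

npc = Emotion()
npc.fear = 0.9               # e.g. the NPC just took heavy damage
print(tick(npc))             # -> flee
npc.decay(rate=0.5)          # fear fades to 0.4 over time
print(tick(npc))             # -> patrol
```

Because the emotional state persists and decays between ticks, the same tree can produce different actions for identical world observations, which is the behavioural variety the paper is after.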
An Agent Architecture for Knowledge Discovery and Evolution
The abductive theory of method (ATOM) was recently proposed to describe the process that scientists use for knowledge discovery. In this paper, we propose an agent architecture for knowledge discovery and evolution (KDE) based on ATOM. The agent incorporates a combination of ontologies, rules and Bayesian networks for representing different aspects of its internal knowledge. The agent uses an external AI service to detect unexpected situations from incoming observations. It then uses rules to analyse the current situation and a Bayesian network to find plausible explanations for unexpected situations. The architecture is evaluated and analysed on a use case application for monitoring daily household electricity consumption patterns.
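The "plausible explanation" step can be sketched as posterior inference in a tiny Bayesian network relating hidden causes to an observed unexpected situation, here unexpectedly high daily electricity consumption. The causes, priors and likelihoods are illustrative assumptions, not the paper's model.

```python
# Prior probability of each candidate cause (illustrative numbers)
priors = {"appliance_fault": 0.05, "guests_visiting": 0.15, "normal": 0.80}

# P(high_consumption | cause) -- likelihood of the observation under each cause
likelihood = {"appliance_fault": 0.90, "guests_visiting": 0.60, "normal": 0.05}

def posterior(observed_high=True):
    """P(cause | observation) via Bayes' rule with full enumeration."""
    joint = {c: priors[c] * (likelihood[c] if observed_high else 1 - likelihood[c])
             for c in priors}
    z = sum(joint.values())          # normalizing constant P(observation)
    return {c: p / z for c, p in joint.items()}

post = posterior(observed_high=True)
best = max(post, key=post.get)       # most plausible explanation
print(best, round(post[best], 3))
```

Ranking causes by posterior probability is what turns an anomaly detection ("consumption is unexpectedly high") into a candidate explanation the agent can act on or present to a user.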
INVEST: Ontology Driven Bayesian Networks for Investment Decision Making on the JSE
This research proposes an architecture and prototype implementation of a knowledge-based system for automating share evaluation and investment decision making on the Johannesburg Stock Exchange (JSE). The knowledge acquired from an analysis of the investment domain for a value investing approach is represented in an ontology. A Bayesian network, developed using the ontology, is used to capture the complex causal relations between different factors that influence the quality and value of individual shares. The system was found to adequately represent the decision-making process of investment professionals and provided superior returns to selected benchmark JSE indices from 2012 to 2018.
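The core computation in such a network is marginalization over a conditional probability table (CPT). The sketch below is an illustrative toy, not the INVEST system's actual model: two value-investing indicators drive a single "quality share" node, and the indicator names, priors and CPT entries are all assumptions.

```python
# Parent node priors (illustrative)
p_low_pe = 0.4           # P(price/earnings ratio is attractive)
p_strong_roe = 0.5       # P(return on equity is strong)

# CPT: P(quality_share | low_pe, strong_roe) -- illustrative entries
cpt = {(True, True): 0.90, (True, False): 0.55,
       (False, True): 0.50, (False, False): 0.10}

def p_quality():
    """Marginalize out the parents: P(quality) = sum over parent states."""
    total = 0.0
    for pe in (True, False):
        for roe in (True, False):
            p_parents = (p_low_pe if pe else 1 - p_low_pe) * \
                        (p_strong_roe if roe else 1 - p_strong_roe)
            total += p_parents * cpt[(pe, roe)]
    return total

print(round(p_quality(), 3))   # -> 0.47
```

A real share-evaluation network would have many more indicator nodes and would condition the priors on observed fundamentals for a specific share, but the inference pattern is the same.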
A System for a Hand Gesture-Manipulated Virtual Reality Environment
Extensive research has been done using machine learning techniques for hand gesture recognition (HGR) with camera-based devices, such as the Leap Motion Controller (LMC). However, limited research has investigated machine learning techniques for HGR in virtual reality (VR) applications. This paper reports on the design, implementation, and evaluation of a static HGR system for VR applications using the LMC. The gesture recognition system incorporated a lightweight feature vector of five normalized tip-to-palm distances and a k-nearest neighbour (kNN) classifier. The system was evaluated in terms of response time, accuracy and usability using a case-study VR stellar data visualization application created in Unreal Engine 4. An average gesture classification time of 0.057 ms with an accuracy of 82.5% was achieved on four distinct gestures, which is comparable with previous results from Sign Language recognition systems. This shows the potential of applying HGR machine learning techniques, previously used in non-VR scenarios such as Sign Language recognition, to VR.
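The reported pipeline, a five-element vector of normalized tip-to-palm distances classified by majority vote among the k nearest training samples, can be sketched as follows. The gesture names and training vectors are illustrative placeholders, not the paper's data.

```python
import numpy as np

def classify_knn(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

# One row per training sample: [thumb, index, middle, ring, pinky]
# tip-to-palm distances, normalized to [0, 1] (illustrative values)
train_X = np.array([
    [0.90, 0.90, 0.90, 0.90, 0.90],   # open hand
    [0.85, 0.95, 0.90, 0.85, 0.90],
    [0.20, 0.20, 0.20, 0.20, 0.20],   # fist
    [0.25, 0.15, 0.20, 0.20, 0.25],
    [0.30, 0.95, 0.20, 0.20, 0.20],   # point (index extended)
    [0.25, 0.90, 0.25, 0.20, 0.20],
])
train_y = ["open", "open", "fist", "fist", "point", "point"]

# Classify a new frame captured from the hand tracker
print(classify_knn(train_X, train_y, np.array([0.30, 0.92, 0.22, 0.20, 0.20])))
```

With only five features and a handful of templates per gesture, the distance computation is trivially cheap, which is consistent with the sub-millisecond classification times the paper reports.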